perm filename BACKUP[NUM,DBL] blob sn#145984 filedate 1975-02-18 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00026 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00003 00002	.DEVICE XGP
C00004 00003	.PORTION TITLEPAGE
C00005 00004	2↓_CONTENTS_↓*
C00006 00005	2↓_SUMMARY_↓*
C00010 00006	2↓_1. IDEAS_↓*
C00015 00007	5↓_A Proposed System_↓*
C00032 00008	5↓_Desired Behaviors_↓*
C00036 00009	5↓_Results_↓*
C00039 00010	2↓_2. INTERNAL ACTIVITY_↓*
C00056 00011	5The ENVIRONMENT*
C00073 00012	2↓_3. INITIAL KNOWLEDGE_↓*
C00083 00013	5↓_Initial Knowledge: Level 1_↓*
C00090 00014	5↓_Representation: Level 2_↓*
C00098 00015	4↓_Parts which every BEING might have:_↓*
C00107 00016	INTUITIVE KNOWLEDGE
C00112 00017	5Initial Knowledge: Level 2*
C00119 00018	5Initial Knowledge: Level 3*
C00121 00019		This data is found in the accompanying document, 4Given Knowledege*.
C00122 00020	2↓_4. COMMUNICATION_↓*
C00123 00021	2↓_5.EXAMPLES_↓*
C00124 00022	5Example 1: Contemplation forming links, intuitive conjectures*
C00125 00023	5Example 2: Discovering and developing a family of analogies*
C00126 00024	5Example 3: Formally Investigating an intuitively believed conjecture*
C00130 00025	2↓_6. BIBLIOGRAPHY_↓*
C00143 00026	5ARTICLES*
C00148 ENDMK
C⊗;
.DEVICE XGP
.!XGPCOMMANDS←"/TMAR=30/PMAR=2130/BMAR=40"

.FONT 1 "BASL30"
.FONT 2 "BDR40"
.FONT 4  "BASI30"
.FONT 5 "BASB30"
.FONT 6 "NGR25"
.FONT 7  "NGR20"
.FONT 8 "GRFX35"
.TURN ON "↑↓_π{"
.TURN ON "⊗" FOR "%"
.PAGE FRAME 50 HIGH 84 WIDE
.AREA TEXT LINES 4 TO 48
.AREA HEADING LINES 1 TO 3
.AREA FOOTING LINE 50
.!XGPLFTMAR←200
.SPACING 55 MILLS
.PREFACE 160 MILLS
.NOFILL
.PREFACE 45 MILLS
.FILL
.COUNT PAGE PRINTING "1"
.PAGE←0
.NEXT PAGE
.MACRO B ⊂ BEGIN VERBATIM GROUP ⊃
.MACRO E ⊂ APART END ⊃
.TABBREAK
.EVERY FOOTING(,⊗7{DATE},)
.PORTION TITLEPAGE
.BEGIN CENTER RETAIN
⊗2THEORY  FORMATION:⊗*

⊗6A Proposal for⊗*

⊗2A  SYSTEM  WHICH  CAN  DEVELOP
MATHEMATICAL  CONCEPTS  INTUITIVELY⊗*
.GROUP SKIP 10
.NOFILL
⊗5DOUG LENAT

AVRA COHN



STANFORD UNIVERSITY
ARTIFICIAL INTELLIGENCE LABORATORY

⊗*




Third  Sketch


⊗4Not for distribution⊗*
.END
.NEXT PAGE
.ONCE CENTER
⊗2↓_CONTENTS_↓⊗*

.GROUP SKIP 10

.BEGIN NOFILL PREFACE 150 MILLS INDENT 8
0. Summary
1. Fundamental ideas
2. Internal Structure and Activity
3. Initial Knowledge in the system
4. Communication with the user
5. A few examples
6. Bibliography
.END
.FILL
.EVERY HEADING(⊗5MATH THEORY FORMATION⊗*,,⊗4Doug Lenat⊗*)
.EVERY FOOTING(⊗7{DATE},,page {PAGE})
.PAGE←0
.NEXT PAGE
.ONCE CENTER
⊗2↓_SUMMARY_↓⊗*

.GROUP SKIP 5

	The methods of mathematical creativity are being studied.  More than
merely a taxonomy of theory formation, this project is the foundation for a
system which will learn -- and do -- mathematics.  Initially, this system will
possess some general strategies and some powerful but opaque "intuitive" ability
(e.g., the ability to simulate physical experiments and perceive the results). 

A ⊗4contemplative⊗* period will allow the system to develop concrete facts about its
world and specific techniques (by combining general strategies with observations).
The system would also combine its intuitions to form plausible "theorems"; when
definitions are later provided formally, these will be proposed, and the intuitive
justification will already be present.
The activities in this period are 
expected to be universal, not limited to any single
domain of mathematics.  

After a while, the system will request  inputs from
a human user, in what is to be an ⊗4assimilative⊗* phase. 
These teachings should be the
core definitions of a specific field, and of course should be based on what is 
already mastered. The first experiences could be in set theory, Boolean algebra,
abstract algebra, logic, or
arithmetic. 

Finally, the ⊗4investigative⊗* mode would prevail. In conjunction with a
human adviser, the system would propose and explore interesting new relationships,
decide which creations to name, explore the intuitive meanings of statements, etc.

The driving and pruning forces in all phases are aesthetics and utility. 
The central mechanisms are 
families of BEINGs which live and work in a powerful control environment.

.BEGIN SELECT 6 SPACING 20 MILLS PREFACE 100 MILLS SKIP 3

BEINGs are uniformly structured knowledge modules. (see Green et al.,
⊗4Progress Report on Program-Understanding Systems⊗*, Memo AIM-240,
CS Report STAN-CS-74-444, Artificial Intelligence Laboratory, Stanford
University, August, 1974; or see Lenat, D., ⊗4BEINGS: Knowledge as
Interacting Experts⊗*, forthcoming.)

They are similar to ACTORs, except that BEINGs have structure. Each BEING
in a family has the same set of parts; all parts with the same name have the same
structure (format). This preserves some advantages of both uniformity 
and structure.
.END
.NEXT PAGE
⊗2↓_1. IDEAS_↓⊗*

Throughout all of science, one of the most important issues is that of
theory formation: how to extend, when to define, what to examine next,
how to recognize potentially related concepts, how to tie such concepts
together productively,
how to use intuition, how to choose, when to give up and try another
approach.  These questions are difficult to answer precisely, even in a
single domain.  Problems with natural language, with experimental
apparatus, and
with subjects which are complex yet poorly-structured,
all becloud the answers.  By restricting the domain of attention to 
⊗4mathematics⊗*, we hope to avoid these difficulties.

A ⊗4solution⊗* to this task would mean successfully
accounting for the ⊗4driving⊗* and the ⊗4pruning⊗* forces which
result in interesting
mathematical theories being developed. Success
could be measured in operational terms, by applying these forces to
various domains of mathematics, and comparing the results to what is
already known in those fields.

The ideas explored here are that:

(i) These forces are aesthetics,
utility, intuition, analogy, inductive inference (based on empirical evidence), 
and deductive inference.

(ii) Each of these forces is useful both in generating new conjectures, and
in assessing their acceptability.

(iii) If the essence of these ideas can be factored out into an explicit set
(of rules, predicates, BEINGs, programs...), then they can be used to
develop almost any branch of mathematics, at almost any level.

(iv) A protocol was taken, and indicates that the researcher must have a very
good set of strategies, organize them carefully, and use them wisely
to avoid getting bogged down in barren
pursuits. Some of this wisdom must pertain to precisely what is to be
remembered/recorded: a surfeit is bewildering, a shortage dangerous.

(v) Each mathematical concept should be represented in several ways, 
including declarative, operational, exemplary (especially boundary
examples), and intuitive.

(vi) A large foundation of intuition, spanning several mathematical and real-
world concepts, is prerequisite to sophisticated behavior in ⊗4any⊗*
branch of mathematics.  It is not "cheating" to demand some intuitive
concept of sets, before studying number theory, nor to demand some
intuitive understanding of counting before studying set theory, provided the
intuition is ⊗4opaque⊗* (can be used but not inspected in detail)
and fallible.

(vii) The vast amount of necessary initial knowledge can be 
generated from a much smaller
core of intuition and definite facts, using the same collection of
strategies and wisdom as 
the ones which also do the discovery
and the development
(those outlined above in (iv)).

(viii) The more basic the initial concepts, the more chance there is that the
research will go off in directions different from those taken by humans, and the
more chance that it will be a waste of time.

.SKIP TO COLUMN 1
⊗5↓_A Proposed System_↓⊗*

Let us consider now what would be the characteristics of
a man-machine system which could be used  experimentally. The system would
start with a core of intuition, several  strategies, knowledge of how to
use the facts and the strategies, and an ability to evaluate an entity's
interest and its certainty.
It would think to itself awhile, producing purely intuitive "universal" relationships,
until the interest level of that activity fell to a low level.
Since these activities don't utilize any alien authority, 
this stage can be programmed and run
even before any natural communication system is designed.

Eventually, the system's model of the user would indicate that his
guidance, though slow and errorful, was preferable to continuing this wandering
development. The system might ask for specific information relating to the
concepts it had discovered the best intuitive "theorems" about, or might simply
request tutoring in any domain of the user's choosing.

The human user's
first task would be to input a body of concepts about a specific domain
(for each, he should provide definitions, examples, intuitive pictures,
etc.). Then the system will begin exploring that domain, using its
(hopefully universal) body of mathematical strategies.  Occasionally, the
user may interact with the system.  Occasionally, the system may do
something interesting.  The following ideas are fairly concrete, dealing
with such a system.

(i) A system, if containing modules for each driving and pruning force,
should operate even if some of these forces have been
"turned off,"  so long as ⊗4any⊗* of the modules remain
enabled. ⊗6 For example, if all but
the formal manipulation knowledge is removed, the
system should still grind out
(simple) proofs. If all but the analogy and intuition modules are excised,
some plausible (but uncertain) conjectures should still be produced and
built upon.   While the first situation is not planned intentionally, the
second one is actually what we mean by the contemplative phase.⊗*

(ii) The human working with the system has several roles. First, he must
determine what domain of mathematics is to be examined, what is to be
assumed as known, etc.  Second, he must guide the system, by suggestion
or discouragement, to avoid (probably) fruitless investigations, and to
concentrate on desirable topics.
Third, he might be called on as absolute authority, to provide a needed
fact (e.g., a theorem from another domain) at just the right time.
Ultimately, he might become a co-researcher, when and if the system is
operating in a domain unknown to him.

(iii) In what sense will the system justify what it believes? The aim is
not to build a theorem-prover, yet the leap from "very probable" to 
"certain" is not ignorable.  Many statements are infinitely probable yet
silly (e.g., ⊗6given a number x, choose numbers y at random. The probability
that y > x is unity⊗*.)  Some sophisticated studies into this problem have
been done [Pietarinen] and may prove usable. The job of proving an assertion
should be made much easier by the presence of intuitive understanding. If
a constructive proof is available, the necessary materials will already be
sketched out for the formal methods to build upon.
.BEGIN SELECT 6 SPACING 20 MILLS PREFACE 110 MILLS
	The mechanism for belief in each fact, its certainty, should be
descriptive (a collection of supporting reasons) with a vector of numerical
probabilities (estimated for each factor) attached. These numbers
would be computed at creation of this
entity, recomputed only as required.   The most fundamental entities may
have ⊗4only⊗* numerical weights. 
If the weight of any entity changes, no "chasing
around" need be done. Contradictions are not
catastrophic: they simply indicate that the reasons supporting each of the
conflicting ideas should be reexamined, their intuitive and formal
justifications scrutinized, until the "sum" of the ultimate beliefs in
the contradictory statements falls below unity, and until some intuitive
visualization of the situation is accepted.
If this never happens, then a problem really exists here, and might
have to be assimilated as an exception to some rule, might decrease the
reliability placed on certain facts and methods, etc.
This algorithm, whatever its details, should be embedded implicitly in the
control ⊗4environment⊗*; the system should not have access to inspect or
modify it.
.END
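.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
The fragment below is only a sketch of how such a certainty record might be kept;
the names (Reason, Belief, add_reason) and the particular combining rule are
merely illustrative, not commitments of the design:

from dataclasses import dataclass, field

@dataclass
class Reason:
    description: str          # e.g. "verified in a dozen randomly chosen cases"
    weight: float             # estimated probability contributed by this reason

@dataclass
class Belief:
    reasons: list = field(default_factory=list)
    cached: float = None      # recomputed only when the reasons change

    def add_reason(self, description, weight):
        self.reasons.append(Reason(description, weight))
        self.cached = None    # invalidate; no "chasing around" of other beliefs

    def weight(self):
        if self.cached is None:
            miss = 1.0
            for r in self.reasons:
                miss *= (1.0 - r.weight)   # treat the reasons as independent supports
            self.cached = 1.0 - miss
        return self.cached

def contradiction(b1, b2):
    # conflicting statements are tolerable until belief in both "sums" past unity
    return b1.weight() + b2.weight() > 1.0
.END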

(iv) The communication between the system and the human should be in a
language suited to the particular role he is playing. Thus there can be 
some formal language, some traditional math notation language, some
pictorial language, etc.  Although efficiency will demand a fixed
syntax and semantics for each of these, a trial protocol has indicated
that the typical form of mathematical communication is well defined
(i.e., it should be feasible to construct formal languages in this
domain, for which the user will not need much prior training).

(v) The following diagram indicates the (traditional) logical progression
of domains in mathematics, and the system should be able to start almost
anywhere and move forward (following the arrows).
Movement backward might be possible, and in some cases may be quite smooth.
This is because the psychological progression does not mirror the logical
progression.
.BEGIN NOFILL GROUP SKIP 1 SELECT 8 TURN OFF "↑↓_" PREFACE 0 MILLS

Elementary Logic  ααα→  Theorem-Proving  ααααααααααααααα⊃
    ↑							~
    ~							~
    ~							~
    εααααααααααα→  Geometry  ααα→  Topology		~
    ~		    ~			~		~
    ~		    ~			~		~
    ~		    ↓			↓		~
    ~      Analytic Geometry       Algebraic Topology	~
    ~		  ↑  			↑		~
    ~             ~  			~		~
    ↓	          ~  			~		~
Boolean Algebra  αβα→    Abstract Algebra 		~
    ↑             ~       ~				↓
    ~	          ~       ~		  Program Verification
    ~	    Analysis      ↓				↑
    ~		   ↑     Concrete Algebra		~
    ~              ~      ↑				~
    ~		   ~      ~				~
    ↓		   ~      ~				~
Set Theory  ααα→  Arithmetic  ααα→  Number Theory	~
		      ~					~
		      ~					~
		      ↓					~
		Combinatorics  ←ααα→  Graph Theory  αααα$

.E

(vi) Advancement in field x
should be much swifter if field y is mastered already, regardless of which
fields x and y represent.  

(vii) To start in a particular field, there must be much 
intuition, and some definite facts, about each preceding ("⊗8αα→⊗*"
in the diagram above)
domain of mathematics.  For this reason, we expect to start with
logic, set theory, Boolean algebra, or arithmetic, and move from one of these 
to another, or move along the arrows in the diagram. 
The progression to number theory is the tentative choice for an advanced thrust.

(viii) Since the precise strategies are so crucial, it might be
advantageous to allow them to evolve. This includes changing the system's
notion of interestingness as its experience grows. Such an ability is so slippery
that the system is tentatively planned ⊗4not⊗* to have complete power here.
The strategies may be inspected and changed, just like specific facts, but
all notions of interestingness, belief, safety, difficulty, etc. are fixed for
the system by its creator. If they are unsatisfactory, he must retune them.

(ix) It seems desirable to use a single representation for all of these: 
specific knowledge about objects, 
operators, conjectures, properties, and communication;
and strategies for dealing with each type of
this specific knowledge. The "fixed" wisdom of how to home in on the most
relevant strategies at any given time can perhaps be stored in a more efficient
manner (and will probably be completely opaque to the system).
A family of BEINGs will be designed for each of the specific
knowledge and strategy categories; each family
will have its own set of BEING parts. The parts of a specific knowledge BEING
will relate all the various kinds of things one can know about a single
mathematical concept (Usage, Boundary examples, Name,...); the parts of a
strategy BEING will contain guides for filling in parts of specific information
BEINGs (organized into a group of rules for each specific BEING part).

(x) Control in the system will involve zeroing in on a relevant part of a relevant
BEING, then using strategies dealing with such a part to work on it. The proposed
control algorithm is fairly complicated; several kinds of decisions are routinely
required. The family of BEINGs is determined, the subfamily, then the
specific BEING. The group of parts relevant is determined, followed by the
specific part. The specializing choice is typically made by the currently
available group. For example, after determining that one of the Proof
BEINGs is relevant, the system lets all the proof BEINGs fight it out and
decide which of them is most relevant. This diffusion of decisive power is
common in human activities but surprisingly rare in computer programs.
Let us suppose that somehow we have decided that part p of BEING b must be
filled in (e.g., we are trying to build up an analogy and the analogous
part is already filled in in the other BEING).
Strategies associated with that kind
of part for that family of BEING will then be run, and will attempt to
fill in the part p. This may result in new discoveries, in new BEINGs being
created, in failure, in fully filling in p, or in partially filling it in
but stopping because some new fact was encountered which might be more
interesting.

(xi) The basic mechanism is thus the filling in and the running of BEINGs.
But BEING parts are generally procedural knowledge, so this task really means
automatic code synthesis. Knowledge is stored with an eye toward future usage,
both in where it is placed and how it is recorded; the mechanisms which do this
are coded into the system (they need ⊗4not⊗* be BEINGs).

(xii) The user might be represented within the formalism. At each level
(specific knowledge and strategies) there would be a separate USER BEING.
Its VALUE part could indicate the costs and 
desirability of  querying the user at this level.
The actual translation could be by efficient environmental 
functions called by these BEINGs.

.SKIP TO COLUMN 1
⊗5↓_Desired Behaviors_↓⊗*

The conception of the project is to build a system that learns and does
mathematics by creating and maintaining and using "good" internal
organizations of its knowledge.
The sorts of behavior envisioned (evidences of successful assimilation and of
intuitive behavior) are:


Accepting and filing new information in a useful, connected manner,

Giving quick judgements (short of formal proof or disproof) about the truth
of conjectures,

Proposing reasonable (not necessarily true) new conjectures,

Having a sense of interestingness or worth-of-pursuing,

Weighing evidence for or against claims,

Assessing the difficulty of problems,

Extending and generalizing from examples (inductive inference),

Giving constructive plans for proof/solution to the extent 
that they are present in the intuition,

Adapting dynamically; readjusting old schemas, shifting, reorganizing,

Effectively mobilizing facts and techniques by using analogy, relevant
features, etc.

Exercising a notion of the relatedness of propositions ⊗4apart⊗* from
the logical notions (probable implication, co-dependence on something
else, support for, interdependence...): to give a convincing argument,
to explain meaningfully, to be convinced or explained to,

Having, maintaining, using, and discovering several organizations 
over the same knowledge for different uses,

Understanding of math as a logical whole with many interconnections;
ability to take different starting points,

Reflecting, clarifying, and interrelating its own content (to name things,
to isolate things, to reorganize itself),

Inventing mini-theories on a topic, to do small-scale research by tying
fragments and observations together into a coherent whole; generalizing
from the results of working on a few problems.

Being taught by various techniques, by anyone who has about an hour's preparation
in the content and the format of the possible dialogues.

.SKIP TO COLUMN 1
⊗5↓_Results_↓⊗*

There are several possible outcomes of all this. Even the most dismal
would yield some information about theory formation. At the optimistic extreme,
the system would yield new theorems in mathematics and new ways of approaching 
existing ones.

The ideal would be for the system to find a useful
redivision of some concepts, and new concepts overlooked by mathematicians.
The next best result would be the re-discovery and re-development of
existing mathematics, but only by being carefully led along the "right"
path. Even here, one should demand that it not be given so much to
start with that the end result is predetermined.

Even if the system never gets beyond the  most elementary levels in each
field, that very failure will indicate for the first time a lower bound
on the magnitude of the theory formation problem. If our best efforts
produce only meager results, we will have to rethink the set of 
strategies over and over again. This might actually result in a better
final set of strategies than if the original set (chosen by introspection)
performs well!  

How much the strategies must adapt as the system proceeds is not known,
and will be learned during the experiment. It is hoped that such notions as
"how to use the strategies", interestingness, etc., need not evolve as well.
They will be tuneable only by the system's creators. 

If the strategies can generate all the necessary initial intuition from
a tiny hand-selected core, that alone is worth study. No one has studied either
of the two ideas this depends on: that such a basis exists, and that no special
techniques are necessary to expand that core. In general, this will indicate how
a self-contemplative process must differ from a purely investigative process.

.NEXT PAGE
⊗2↓_2. INTERNAL ACTIVITY_↓⊗*

This quite tentative description is meant to get things off the ground.
As more demands are imposed, it may well crumble, hopefully to reveal a better
internal organization.

A ⊗4BEING⊗* is simply a collection of parts, nothing else. 
Each part consists of a name and an internal value. 
The BEING must belong
to a ⊗4family⊗*, and each family member has exactly the same set of parts (the
names are the same; the values vary with the specific BEING).
The value of all parts with the same name must be stored in a known format (which
can vary with the part name); all these formats are described (in a single format).
Just as BEINGs group into families, so some parts group into ⊗4part groupings⊗*.
All family members have the same set of parts names; all parts of a grouping
have some interrelated semantics. That is, if the set of parts were ideally
orthogonal, one wouldn't have any meaningful parts groupings. There is an
⊗4advantage⊗* to the grouping, however: that of factoring. One needn't choose
between an array of 36 parts; rather, make a choice of one of six groupings,
followed by a choice of one of six specific parts depending on the grouping.

To each part corresponds an archetypical BEING, 
giving information about any part having
that name (how to fill it in, when to extend it, etc.) One part of each 
archetypical BEING is called Representation, and describes the format in which each
BEING must keep info. stored inside that part.
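.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
A minimal sketch of these conventions follows (the class and attribute names are
only illustrative): a family fixes the set of part names and their groupings,
every member carries exactly those parts, and the archetypical BEING for a part
name holds, in its Representation part, the format every such part must follow.

class Family:
    def __init__(self, name, part_names, groupings):
        self.name = name
        self.part_names = set(part_names)     # identical for every member
        self.groupings = groupings            # e.g. six groupings of six parts each

class Being:
    def __init__(self, name, family, parts):
        assert set(parts) == family.part_names, "members share one set of part names"
        self.name = name
        self.family = family
        self.parts = parts                    # part name -> value, in that part's format

class Archetype(Being):
    # one per part name; its REPRESENTATION part describes the format which the
    # value of every part with that name must keep
    def format_for_part(self):
        return self.parts["REPRESENTATION"]
.END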

A rather specialized ⊗4environment⊗* exists to support these BEINGs, encoded as
efficient opaque functions. The environment must oversee the flow of control
in the system (although the BEINGs themselves make each specific decision as to
who goes next), must include evaluations of belief, interest, superficiality,
safety, utility; must keep brief statistics on when and how each part of each
BEING is accessed. When a part is big and heavily accessed, detailed records
must be kept of each usage (how, why, when, final result) of each ⊗4subpart⊗*.
Based on this, the part may be split into a group of new BEINGs, and the value
of the part replaced by a pointer to a list of these new BEINGs. 
	The typical
flow of control would follow these patterns:

.B

Decisions
	A. Choose relevant family of BEINGs
	B. Choose relevant subfamily of BEINGs
	C. Choose relevant group of parts
	D. Choose specific relevant BEING
	E. Choose specific relevant part
	F. Work on that part of that BEING
	   If its REPR. is that of several alternative BEINGs, then recurse to step D

Constraints
	A will generally precede all others
	B will generally precede D
	C will generally precede E
	F will generally succeed all others
.E

The environment selects a group of BEINGs, who then fight among themselves
(their RECOG group of parts) to determine exactly who the winner is. If the
winner is himself a node representing a group of BEINGs, that group must also
compete among itself for control, etc. Similarly for deciding which part of the
BEING is relevant. If the rele. part of the rele. BEING is itself a pointer to
a group of BEINGs, then the process recurs (except that the same part of the
new winner is probably wanted). The environment does not make the decisions; it
⊗4does⊗* do two things related to this, though: it decides which of A,B,C,D,E,F
is to be settled next, and then it chooses the entity who
will get to make that selection. The first job is simplified since there are
only six legal orders:
.B
ABCDEF
ABCEDF
ABDCEF
ACBDEF
ACBEDF
ACEBDF
.E
An exception would be when a specific BEING is known to be wanted; then
A, B, and D are made simultaneously.
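.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
As a check, the six orders can be enumerated mechanically from the constraints
(A first, F last, B before D, C before E); the small sketch below does exactly that:

from itertools import permutations

def legal_orders():
    orders = []
    for middle in permutations("BCDE"):
        if middle.index("B") < middle.index("D") and \
           middle.index("C") < middle.index("E"):
            orders.append("A" + "".join(middle) + "F")
    return orders

print(legal_orders())
# ['ABCDEF', 'ABCEDF', 'ABDCEF', 'ACBDEF', 'ACBEDF', 'ACEBDF']
.END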

The environment would have to accept the returning messages of the attempt to
deal with a certain part of a certain BEING. A success or a failure would mean
backing up to the last decision and re-making it. An "interrupt" from a trial
would mean "here is some possibly more interesting info". The environment
must decide if it is; if not, it returns control to the interrupted process.
If so, it automatically switches to that part of that BEING (the part may
not be specified). Later, there will be no automatic return to the interrupted
process, but whatever sequence of decisions led to its initiation may very
probably lead there again.
Two tricks are planned here. One is a cache: each BEING will let  its
RECOG parts store the last value computed, and let each
such part have a quick predicate which can
tell if any feature of the world has changed which might affect this value.
If not, then no work is done; the old value is simply
returned. If x is interrupted,
an auxilliary development is begun, and then work on x should continue,
most of the decisions leading back to x will probably not involve any real
work, since most of the world hasn't changed. The second trick is that to
evaluate a part, one moves down its code with a cursor, evalling. When
interrupted, that cursor is left just at the point one wants to start at when
the work resumes.
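.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
A sketch of the first trick (all names are illustrative; the world "stamp" is an
assumed bookkeeping device, and the cursor trick appears only as a comment):

class CachedRecogPart:
    def __init__(self, compute, changed_since):
        self.compute = compute               # the expensive relevance computation
        self.changed_since = changed_since   # quick predicate: has anything relevant changed?
        self.last_value = None
        self.last_stamp = None

    def value(self, world):
        if (self.last_stamp is not None
                and not self.changed_since(world, self.last_stamp)):
            return self.last_value           # nothing relevant changed: no real work done
        self.last_value = self.compute(world)  # for the second trick, compute() would keep a
        self.last_stamp = world.stamp          # cursor into its code, resumable after interrupts
        return self.last_value
.END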

New BEINGs are created automatically if, when a part is evaluated and a new entity
formed, it has sufficient interest to make it worth keeping by name.
Also, an existing part may be replaced by a list of new BEINGs.
The environment keeps loose checks on the size and usage of each part; if one
ever grows and eats up much time, it is carefully monitored. Eventually, its
subparts may partition into a set whose usage is nearly disjoint. If this
is seen, then the part is actually split into a new set of BEINGs.
If a new BEING doesn't live up to its expectations, it may be executed
summarily (overwritten and forgotten; perhaps enough is remembered to not waste
time later on the same concept).


.SELECT 6
One difference from PUP6 is that here the BEINGs are grouped into families.
Each type has its own set of parts (although there will be many parts present in
many types, e.g. NAME). For each type t there will be an archetypical BEING
B↓t. Under each part of B↓t will be a partially ordered set of strategies
for dealing with that part of that type of BEING (how to fill it out, how to
extend it, what its structure is). Notice we are saying that all the parts with
the same name, of BEINGs of the same type, will all have the same structure.
This is one additional level of structure from the BEINGs proposed in PUP6.

.SELECT 1
The strategies are all organized under specific parts of archetypical BEINGs.
That is, a strategy is pointed to by what part of what BEING type it is related
to filling out. The strategies are (partially) ordered in each such clump. When
the relevant object is part p of BEING B, the type (family name) of
B, say it is t, knows what parts B has. 
Then the sequence of strategies listed under part p
of B↓t is executed. This set is pointed to by part p of B. 

When part p of BEING B is filled out, at some point in the sequence S of strategies
listed under part p of the archetypical BEING with same type as B, some new 
information may be discovered. If S cannot handle this knowledge, then  it will
simply return with the message "I am not through, but here is some fact(s) which
may mean that filling out p is no longer the best activity".
The environment is aware that BEINGs and
parts are both organized into clumps or groupings. When such
an interruption
is reported, the environment will generally pass it on to the clump which
made the last relevancy decision (it first decides if the info. is interesting;
if not, control resumes immediately where it left off). 
A clump may be a part-grouping BEING, a BEING family BEING, a subfamily BEING,...
If the clump regains
control, its first duty is to quickly determine
whether or not it is still the best clump to be in control. If not, it relinquishes
control to the environment,
which asks the clump which called the first one, etc. If it is still the relevant
clump, the decision algorithm outlined above continues on from that point.
The selected part and BEING may turn out the same, or may change due to the new
info which p supplied. 
The flavor of the return should thus be one of: Not Done because x is
possibly more interesting; Not Done because x is a prerequisite to doing me;
Done because I succeeded; Done because I failed utterly.
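.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
These four flavors might be no more than a small fixed vocabulary of messages
(the names below are merely illustrative):

from enum import Enum
from collections import namedtuple

class Flavor(Enum):
    MORE_INTERESTING = "not done: x is possibly more interesting"
    PREREQUISITE     = "not done: x is a prerequisite to doing me"
    SUCCEEDED        = "done: I succeeded"
    FAILED           = "done: I failed utterly"

Return = namedtuple("Return", ["flavor", "x"])   # x names the other entity, if any
.END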

The lower-level BEINGs will provide fast access to well-organized information.
The background environment provides the necessary evaluation services at
high speeds (though the system cannot meaningfully examine, modify, or add to
what environment functions the creators provide).
The BEINGs hold "what to think"; the environment implicitly controls "how to think".

Notice once more the structure: several types of BEINGs and parts; 
each type breaks into
several clumps. Each BEING of each type has the same set of parts. Each clump
of BEINGs has members for determining if that clump is still the relevant one.
Each part of a given type of BEING has a distinctive structure, shared by all
parts with the same name, in all BEINGs of the same type. Each type of BEING
has an archetypical representative; the values of its parts specify the structure
formats mentioned in the previous sentence. All archetypical BEINGs' parts have
a single universal format for specifying this information.

Each clump is (at least partially) ordered, hence can be executed sequentially.
The result may be to choose a lower-level clump, and/or modify some strategies
at some level (some part of some BEING), and/or create new strategies at some
level (perhaps even to create a new BEING). These latter creations and calls
will be in the form of strong suggestions to the environment.

The next step is to list the parts of each of the families of BEINGs, 
followed by filling these in with general strategies for how to fill in such
parts of such BEINGs. Next, all the specific knowledge must be rephrased in
terms of specific knowledge BEINGs.
In doing all this, representation decisions must be
made (e.g., when talking specifically about the INTUITION part, one must know
what formats to expect there). The clumpings at each level are to be treated 
as any other type of BEING; they must have
special knowledge added to decide when the current clump is no longer
the one which should be in control.
At this stage, the system will be ready for hand simulation and then coding.
.SKIP TO COLUMN 1
⊗5The ENVIRONMENT⊗*

There are really only two activities which are performed by the system environment:
.BEGIN NOFILL INDENT 0

COMPLETE(P,B) means fill in material in part P of BEING B. 

1. Locate P and B.
.BEGIN FILL   NARROW 6,0

If P is unknown but B is known, ask B, esp. B.ORDERING; there may be special information
stored in some part(s) by other BEINGs.
If B is unknown but P is known, ask P and ask each β about interest of filling in P.
Each β runs a quick test to see if it is worth doing a detailed examination.
If neither is known, each β must see how rele. it is; the winner decides on P.
If there is more than one β tied for top recognition, then place the results
in order using the environment function ORD, which examines the Worth components
of each, and by using the value of the most promising part to work on next for each
BEING. The frequent access to the (best part, value) pair for each BEING means that
its calculation should be quick; in general, each β will recompute it only when new
info. is added to some part, or at rare intervals otherwise.
After ranking this list, chop it off at the first big break in values, and print it out
to the user to inspect. Pause WAIT seconds, then commence work on the
first in the list. 
WAIT is a parameter set by the user initially. 0 would mean go on unless I interrupt
you, infinity would mean always wait for my reply, etc.
When you finish, don't throw the list away until after the
next B is chosen, since it might turn out to simply be recomputed! If the user
doesn't like the choice you've made, he can interrupt and switch you over.
A similar process occurs if P is unknown, (except the list is never saved).
.END

2. Collect pointers to helpful information. Create a (partially ordered) plan for B.P.
	This includes the P.FILLIN part, and in fact any existing up↑*(B).P.FILLIN.
	also some use of the representation, defn, views, dom/range parts of that BEING.
	Consult ALGORITHMS and FILLIN parts of B and all upward-tied β's to B.

3. Decide what must be done now; which of the above pieces of information is "best".
	Tag it as having been tried. 
	If it is precisely = one currently active goal, then forget it and go to 3.

4. Carry out the step.  (Evaluate the interest of any new BEING when it is created)
	Notice that the step might in turn call for accessing and (rarely) filling
	in parts of other BEINGs. This activity will be standard hierarchical calling.
	As parts of other BEINGs are modified, update their (best part, value) estimate.

5. When done: 
	Update statistics in B, P, and current situation. (worth and recog parts)
	If we are through dealing with B.P (because a higher-interest entity exists,
	or because the part is filled in enough for now) goto 1; else goto 3.
	If you stop because of a higher-interest entity, save the plan for B.P inside B.P.
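.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
The skeleton below restates steps 1-5 of COMPLETE; every helper it calls on the
environment (locate, collect_plan, best_untried_step, and so on) is assumed for
the sake of the sketch, not specified here:

def complete(P, B, env):
    P, B = env.locate(P, B)                  # step 1: settle which part of which BEING
    plan = env.collect_plan(P, B)            # step 2: FILLIN, ALGORITHMS, views, dom/range
    while True:
        step = env.best_untried_step(plan)   # step 3: pick the "best" piece and tag it tried
        if step is None:
            break
        if step in env.active_goals:         # already an active goal: forget it, pick again
            continue
        result = step.carry_out()            # step 4: may call other BEINGs hierarchically
        env.update_statistics(P, B, result)  # step 5: worth, recog, (best part, value)
        if result.interrupted:               # a higher-interest entity has appeared
            env.save_plan(P, B, plan)
            return result
        if result.filled_enough:
            return result
.END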
.SKIP TO COLUMN 1
ACCESS(K,P,B) means access pieces of knowledge K from part P of BEING B.

1. Locate each argument
	Typically given K. Find P' by asking archetypes, B' by asking all BEINGs.
	By iterating through this loop, the sets P' and B' will become singletons.
	The smaller they become, the more effort can be spent on distinguishing the choice.
2. Interpret the material in part P of BEING B.
	Use the representation part of P. 
3. Match K to this pattern, and try to extract it directly. 
	Often this will entail evalling or applying B.P.
	⊗7Evaluation is viewed as just one technique for processing a clump of knowledge, B.P,
		and extracting the precise bit K which is desired.⊗*
4. If the accession fails, consider P.VIEWS, consider setting up a message, consider
	giving up. Let the interest of the current goal be your guide.
	When the access operation is over, return control to the β who hierarchically called this.


CURRENT SITUATION is a vector of weights and features of the recent behavior of the system.
.END

In addition to these two operations, the environment also maintains a list of records
and statistics of the recent past activities, CS, current situation.
Each Recognition grouping part is prefaced by a vector of numbers which are
dot-multiplied into CS, to produce a rapid rough guess of relevance.
Only the best performers are examined more closely for relevance.
The representation of each component is (identification info, motivation,
safety, interest, work done so far on it, final result or outlook). The
actual components might be:
.BEGIN NOFILL
Recent Accesses.   For each, save (B, P, contents of subpart used).
Recent Fillins.    Save (B, P, old contents which were altered).
Current Hierarchical History Stack.  Save  (B, P, why).
Recent Top-level B,P pairs.
A couple significant recent but not current hierarchical (B,P,why) records.
A backward-sorted list of the most interesting but deferred (B,P) fillins.
A few recent or colossal fiascos (B, P, what, why this was a huge waste).


ORD(B,C)  Which of the recognition-tied BEINGs B,C is potentially more worthwhile?

.END

This simple ordering function will probably examine the Worth vectors,  perhaps
involving the sum of weighted factors, perhaps even cross-terms such as
(probability of success)*(interest rating).
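.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
A sketch of this two-stage judgement (the attribute names recog_weights, parts,
and best_part_value are assumed here purely for illustration):

def rough_relevance(being, cs):
    # cheap prefilter: dot product of the RECOG weight vector with the current situation
    return sum(w * f for w, f in zip(being.recog_weights, cs))

def ord_key(being):
    worth = being.parts["WORTH"]              # e.g. aesthetic, efficiency, certainty, ...
    p_success, interest = being.best_part_value
    return sum(worth) + p_success * interest  # weighted sum plus one cross-term

def best_candidates(beings, cs, keep=5):
    rough = sorted(beings, key=lambda b: rough_relevance(b, cs), reverse=True)[:keep]
    return sorted(rough, key=ord_key, reverse=True)
.END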

.BEGIN NOFILL INDENT 0

PLAUSIBILITY(z)       How believable is z?    INTEREST(z)    How interesting is z?

         each statement has a probability weight attached to it
         this number is a fn. of a list of justifications
         if there are several alternate justifs., it is more plausible
         if some consequences are verified, it is more plaus.
         if an analogous prop. is verified, it is more plaus.
         if the consequences of analogue are verif., it is slightly more plaus.
         the converses of the above also hold
         nothing is certain unless it has been (dis)proved
         believe in those things with high enough prob.
         this level should fluctuate, remaining just high enough so that
            no contradictions are believed in
         the higher the prob., the higher the reliability
         the amt. one bets should be prop. to the reliability
         the interest increases as the odds do
         Zadeh: p(∧) is min, p(⊗6∨⊗*) is max, p(¬) is 1-.
         Hintikka's formulae (λ, α)
         Carnap's formulas (λ)
         p=1 iff both the start and the methods are certain
         p=0 iff both start is false and method is false-preserving
         if ∃ several alternative plaus. justifs., p is higher
         don't update p value unless you have to
         update p values of contradictory props.
         update p values of new props
         maybe update p value if it is a reason for a new prop
      empiricism, experiment, random sampling, statistics
         true ideas can be verified in all experiments
         false ideas may only have a single exceptional case
         nature is fair, uniform, nice, regular
         more plaus. the more cases verified
         more plaus. the more diff. types of cases verified
         central tendency (mean, mode, median)
         standard deviation, normal distribution
         other distributions (binomial, Poisson, flat, bimodal)
         statistical formulae for significance of hypothesis
      regularity, order, form, arrangement
         economy of description means regularity exists
         aesthetic desc (ana. to known descs. elsewhere)
         each part of desc. is organized regularly
         the parts are related regularly

   Below, α means ⊗4increases with increasing⊗* (proportionality), and
   α↑-↑1 means ⊗4decreases with increasing⊗* (inversely proportional).

   Completeness of an analogy  α  safety of using it for prediction
   Completeness of an analogy  α↑-↑1 how interesting it is
   How expected a relationship is  α↑-↑1  how interesting it is
   How intuitive a conjec/relationship is  α↑-↑1  how interesting it is
   How intuitive a conjec/relationship is  α  how certain/safe it is
   How superficial something is  α  how intuitive it is
   How superficial something is  α  how certain it is
   How superficial something is  α↑-↑1 how interesting it is

   Also included here should be algorithms for applying these rules
   to choosing the best strategies, as a function of the situation.

   Crude estimate of interest level is the interest component of the eval part
   Modify this estimate in close cases using the above relations
   Generally, choose the most specific strategies possible
   If the estimated value of applying one of these falls too low, try a more general one
   Rework the current B. slightly, if that enables a much more specific strategy to be used
   Locate specific concepts which partially instantiate general strategies
   The more specific new strategies are associated with the specific info. used
   Once chosen, use the strategies on the most promising specific information
   If a strat. falters: Collect the names of the specific, needed but blank parts
      Each such absence lowers int. and raises cost, and may cause switch to new strategy
      If too costly, low int, store pointer to partial results in blank parts 
         The partial results maintain set of still-blank needed parts

   Competing goals: On the one hand, desire to maximize certainty,
      safety, complete analogies, advance the level of intuition.
      On the other hand, desire to maximize interestingness, find poss. and poten. interesting ana.
       find unexpected, nonsuperficial, and unintuitive relationships.
   If an entity is used frequently, it should be made efficient.
      Conversely, try to use efficient entities over nearly
      equivalent (w.r.t. given purpose) but inefficient ones.
   If an entity is believed but powerful (unintuitive), its use is
      dangerous but probably very interesting.
   Resolve choices in favor of aesthetic superiority

   Maximize desired effects
      In this case, prefer hi interest over hi safety.
   Minimize costs, conserve resources
      In this case, prefer safety to interest.
      Locate the most inefficient, highest-usage entity, and improve or replace it

   Maximize net behavior
   Generally prefer the "desired effects" types of strategies, not the "minimize cost" ones.
   Except: If time/space become a problem, worry about conservation until this relaxes.

.END
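.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
The propositional rules quoted above reduce to a few lines; the combining rule for
several alternative justifications shown last is only one possible choice:

def p_and(*ps):  return min(ps)        # Zadeh: p of a conjunction is the minimum
def p_or(*ps):   return max(ps)        #        p of a disjunction is the maximum
def p_not(p):    return 1.0 - p        #        p of a negation is one minus

def believed(p, threshold):
    # the threshold floats, kept just high enough that no contradictions are believed
    return p >= threshold

def p_alternatives(justifications):
    # several alternative justifications make a statement more plausible
    miss = 1.0
    for j in justifications:
        miss *= (1.0 - j)
    return 1.0 - miss
.END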
.NEXT PAGE
⊗2↓_3. INITIAL KNOWLEDGE_↓⊗*

This section proposes a corpus of information, some of  which will be carefully
constructed, and all of which should
be present in the system before the user approaches it.
This presentation will be repeated at several levels of detail, so that
the reader will obtain a global view before going into detail.
The deeper the level, the more definite  the assumptions which are needed in
order to fill out the knowledge. Even at the descriptive level in this
document, some representation decisions had to be tentatively assumed.
	There are two distinct hierarchies or levels of knowledge. The one
which dominates this organization is specific facts → strategies → environment.
Each level
contains rules and hints for handling the entities one level lower.  
	The
second kind of hierarchy exists at one level, dealing with objects of more
and more generality. For example, "tie new concept in to existing ones" is
at the same level yet subsumes "consider composition of f and existing fns."
Thus there might be some grammar, some generation scheme,
whereby general knowledge (at a given level) combines with specific information
(at any level) to produce tailored, specialized
knowledge at that level (more concrete but less generally applicable).

Four distinct stages of operation of the system seem to be called for.
First comes a strategy-development phase, where all these special strategies
are grown from a tiny hand-written core.
Next comes a trial exploration phase, where interrelationships
and facts ⊗4independent of any particular mathematical domain⊗*, 
or intuitive
relations which may be domain-specific, are developed.   The system
will finally be ready to confront the user, who feeds in specific facts,
definitions, conventions, and suggestions about a particular domain.
The fourth stage then begins, with the actual user-system dialogue on a
particular branch of mathematics.

⊗5↓_REPRESENTATION_↓⊗*

The system is envisioned as having two quite different levels of 
understanding: definite and intuitive.  Some of the knowledge present
initially will be stored in each of these forms.
The actual ways to represent the knowledge, especially
intuitive knowledge, are of some interest.  As before, the presentation
is repeated at a few different levels of detail.
Since the representation must be known before the knowledge can be understood
in that format, we present first the levels of detail about our representations:

⊗5Representation: Level 1⊗*

DEFINITE knowledge is represented as rules, BEINGs, and opaque functions.
Why the separate representations?  Some tasks, such as translation
of user inputs, are uninteresting but must be done efficiently; some, such
as choosing the next relevant part to fill in, may be more lavish with time; and
finally, some tasks, such as filling in the details of an analogy, are so
sophisticated and infrequent that a vast amount of time can be "wasted".
Different types of tasks are therefore represented by formalisms
of differing intelligence and efficiency: fast, specific functions for the
quick and easy tasks, rule systems for the medium ones, and BEINGs for the
rare but delicate tasks.
Specific knowledge should be stored with an eye toward meaningful later access,
hence will also be stored as BEINGs.

The BEINGs fall into several categories, depending on what type of knowledge
they possess. 
The rules are arranged in  pools, with several independent
pointer systems to locate rules relevant in various ways. The functions
are compiled code, performing utility, translation, intuition, and
administrative functions. The latter two formalisms are collectively referred
to as the ⊗4environment⊗*, because they form the background for the BEING
activities.

INTUITIVE knowledge is represented as pictures, abstract rules, examples.
Set theory books themselves usually have pictures of blobs, or dots with a
closed curve around them, representing sets. For our purposes, a set will
be represented in many ways.  These include pointer structures for ⊗6ε⊗*, ⊂,
and their inverses; analytic geometric functions dealing with sets as
equations representing regions in the plane; archetypical examples of sets;
a collection of abstract rules for simulating the construction and
manipulation of sets; and, finally, a set might
be intuitively represented as a square in the Cartesian plane. 

.BEGIN SELECT 6 SPACING 20 MILLS PREFACE 100 MILLS SKIP 1
	Let us now deal with this square representation in  more detail.
The notions
of intersection, union, complement, setdifference, disjointness, projection
onto each axis, etc. are intuitively available.  Notice that the
sophisticated operations required (e.g., projection) will exist as opaque
functions, totally inaccessible to the rest of the system. This is worth
restating: it is fair to write a LISP program (which uses the function
TIMES) whose task is to synthesize code for the function TIMES, so long as
the program does not have access to, does not even know about its use of
TIMES. 

.END
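.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
A sketch of such an opaque square-picture package follows (rectangles stand in for
squares, and the names are only illustrative); the rest of the system would see
only the answers, never this code:

from collections import namedtuple

Box = namedtuple("Box", ["x1", "y1", "x2", "y2"])     # with x1 < x2 and y1 < y2

def intersection(a, b):
    x1, y1 = max(a.x1, b.x1), max(a.y1, b.y1)
    x2, y2 = min(a.x2, b.x2), min(a.y2, b.y2)
    return Box(x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None   # None means disjoint

def disjoint(a, b):
    return intersection(a, b) is None

def subset(a, b):
    return b.x1 <= a.x1 and b.y1 <= a.y1 and a.x2 <= b.x2 and a.y2 <= b.y2

def project_x(a):
    return (a.x1, a.x2)                               # projection onto the first axis
.END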

This "square" 
representation is not well suited to all concepts. For that reason,
the system will simultaneously maintain several of the other forms of
intuitive storage mentioned previously.  Consider, for example, the
possibility of fuzzy rules, which can latch onto almost anything
and produce some type of result (but with low certainty). That is, they
operate at a higher level of abstraction than definite rules, by ignoring
many details. Another possibility is the use of examples. If a small set of
them can be found which is truly representative of a concept, then future
references to that concept can be compared to these examples.  This may
sound very crude, but I believe that people rely heavily (and
successfully!) on it.

.SKIP TO COLUMN 1
⊗5↓_Initial Knowledge: Level 1_↓⊗*

⊗5BEINGs present initially⊗*

The following is a sketch of how the top few levels of knowledge in the system
are organized. Each node in the right lower section is both a BEING and the  
archetypical representative of a group of BEINGs. The parts of the BEINGs are also
each represented by BEINGs in the system, though not in the picture below.

.BEGIN SELECT 8 NOFILL PREFACE 0 MILLS TURN OFF "↑↓" 

				    Knowledge
   				        ~
           ⊂ααααααααααααααααααααααααααααβααααααααααααααααααααααααααααααα⊃
           ↓                            ↓                               ↓
      Environment                      Meta                         Specific
           ~                            ~                               ~
           ~           	          ⊂ααααα∀ααααα⊃         ⊂αααααααα παααααα∀ααααααα⊃
           ↓                      ↓	      ↓		↓        ↓	        ↓
        Control                Active      Static     Parts   Relation        Object
    			          ~           ~         ~        ~              ~
     ⊂ααααααααπααααααααααπααααααααλ           ~    	↓        ↓              ↓
     ↓        ↓          ↓        ↓           ~       Recog    Order          (Tree)
   Assume   Guess      Test   Communicate     ~       Alter      =             Set
     ⊂ααααααπααααααααπααααααααπααααπααααααααααλ	       Act       ⊗6ε⊗*             Bag
     ↓      ↓        ↓        ↓    ↓          ↓	      Info       ⊗1⊂⊗*             List
Counterex. Msg. Assumption   Pf.  Thm.  Conjecture               ∨,∧           Axioms
	                                                        @,⊗6↔⊗*           (a,b)
							         ⊗6¬⊗*             Hist
							        ⊗1∪,∩⊗*		   Oset

⊗1This could be carried on further beneath each node. For example, under "proof":⊗*

					  Proof
					    ~     	     
		     ⊂αααααααααααααααααααααα∀αααααααααααα⊃  
		     ↓                                   ↓
		Universal			      Existential
	     	     ~					 ~
		⊂αααα∀αααα⊃		⊂αααααααααααααααα∀ααααααααα⊃
		↓         ↓		↓			   ↓
	    Direct   Indirect	     Direct		       Indirect
					~			   ~
			   ⊂αααααααααααα∀ααααααα⊃	⊂αααααααααα∀αααααααααα⊃
			   ↓			↓	↓		      ↓
			Constructive     Deductive    Constructive     Deductive

.END

In most cases, however, the subsidiary information should be simple BEING parts, not
new BEINGs. For this reason we terminate our graph at this level. After describing
the BEING parts in the next level of representation, we shall return and present
the tentative knowledge stored in each part of each BEING initially in the system.

Notice at this point that some of the supposedly high-level nodes may in fact be
easily generable from more primitive ones. For example, most of the types of
properties a relation can possess fall here. Consider ⊗4Surjection⊗*: it is the
coincidence of two sets: the range and the image. Consider ⊗4Injection⊗*: it is the
fact that the inverse has the interesting property "function". ⊗4Bijection⊗* is the
coincidence of the previous two, and has an interesting intuitive interpretation
(1-1 correspondent matching) which makes it worth keeping as a separate BEING.
Consider ⊗4Function⊗*: it is when every element's image has the very special
property "singleton". These coincidences could easily be proposed, in some order,
and examined. Any which deserved to exist as separate named concepts would be
made into BEINGs, the rest forgotten. Such justifications might include special
simplifications, interesting new properties observed, a new way of intuitively
viewing the situation, etc. 
Even the idea of an ⊗4Inverse⊗* can be discovered from the primitive concept of
reversing the order of an ordered pair. This latter idea probably cannot be
synthesized from more basic ideas, hence ⊗4must⊗* be inserted by hand initially.
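.BEGIN SELECT 6 NOFILL TURN OFF "↑↓_π{" PREFACE 0 MILLS
For finite relations kept as sets of ordered pairs, these coincidences are
directly checkable; the sketch below is merely illustrative of that claim:

def domain(r):    return {a for (a, b) in r}
def image(r):     return {b for (a, b) in r}
def inverse(r):   return {(b, a) for (a, b) in r}

def is_function(r):
    # every element's image is a singleton
    return all(len({b for (a, b) in r if a == x}) == 1 for x in domain(r))

def is_surjection(r, rng):  return image(r) == rng           # range and image coincide
def is_injection(r):        return is_function(inverse(r))   # the inverse is a function
def is_bijection(r, rng):   return is_surjection(r, rng) and is_injection(r)
.END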

Another point to notice is that testing and inferring are separated from their
own by-products, namely conjectures, proofs, and counterexamples.
The former are things to do, the latter are objects which are static.
One can use a theorem, e.g., without remembering or caring how it was proved.

Although conjectures are far removed from belief (in the tree), the environmental
routines permeate throughout temporal and arboreal space. Belief and interest
are constantly being evaluated.

.SKIP 3
⊗5↓_Representation: Level 2_↓⊗*

DEFINITE KNOWLEDGE

A proposed set of BEING parts follows. Most of them occur in more than one
of the many families of BEINGs. One of these is
created whenever a new idea becomes explicit.
Part of the driving force of the system is the urge to ⊗4complete⊗*
each BEING.  

The four pictures below indicate the four main parts groupings, which in turn
reflect the four reasons for calling on a BEING or a part of one:
to see if it is relevant, to modify itself in some way, to deal with a
supplied argument (some part of some other BEING), or simply to answer a question
(accessible information). Under each category are several distinct parts and
in some cases further groupings of parts. Each grouping is itself a BEING; each
part is also represented by one archetypical BEING. In any given case, however,
the value stored in part of a BEING is simply some rules, pointers, numbers, etc.
The exact format is specified in the REPRESENTATION part of the archetypical BEING
whose name is that part.

.BEGIN SELECT 8 NOFILL PREFACE 0 MILLS TURN OFF "↑↓"

.B

			⊂ααααααα⊃
			~ RECOG ~
		        %αααπααα$
			    ~
		   ⊂αααααπαα∀ααπααααα⊃
		   ↓     ↓     ↓     ↓
    	      Changes  Final  Past  Iden

.E
.BEGIN FILL SELECT 1 PREFACE 100 MILLS
The RECOG grouping is concerned with handling the following types of questions:
Are you relevant to this change in the world..., Can you bring about this
state of the world..., How successful were you in situations similar to the
current one..., 
Can you recognize this phrase...
These four  types of questions are handled respectively
by the CHANGES, FINAL, PAST-USE, and IDEN parts.
.END
.B

			⊂ααααααα⊃
			~ ALTER ~
			~  self ~
			%αααπααα$
			    ~
			    ~
	⊂αααααααπαααααααααααβαααααααααααπααααααααπαααααααα⊃
	↓	↓	    ↓		↓	 ↓	  ↓
Generalize  Specialize  Boundary    Ordering   Worth     Ops
			    ↓                    ↓	 
			Dom/Range             Interest
.E
.BEGIN FILL SELECT 1 PREFACE 100 MILLS
The ALTER grouping is concerned with handling the following types of questions:
What is the boundary of the current concept? Why does it exist; why can't you
relax some constraint and generalize yourself? Is there anything interesting
happening when you specialize yourself; how ⊗4can⊗* you specialize yourself?
How incomplete are you; what part should be attended to next? Are you worth
surviving; why, what good are you?  What can (can't) be done to you?
These types of questions are handled respectively
by the Boundary, Generalize, Specialize, Ordering, Worth, and Ops parts.
.END
.B

			⊂ααααααα⊃
			~ ACT w.~
			~ other ~
			εαααααααλ
			/       \
		       /         \
		      /	          \
		     /		   \
		    /		    \
	    Interpret		     Change
	    /       \		    /  ~   \
Representation     Check   Structure Fillin Boundary-operations

.E
.BEGIN FILL SELECT 1 PREFACE 100 MILLS
The ACT grouping is concerned with handling the following types of questions:
How can this entity be pulled across your boundary? (Boundary operators part).
Most of the rest of the questions deal with BEINGs which
represent a part: whether to check to see if this part might be too unstructured;
if so, ⊗4how⊗* to check this; if indicated, how interesting should
the subpart(s) be before actually doing something; to act, do we split or
merely restructure (Structure part).
What is the format of a typical one of you? (Representation part).
How much of this has been filled in so far? How do I fill in
some more? (Check, Fillin).
In general, there are two kinds of requests here. One is for actually changing
a part whose name is the name of this BEING (use the Change subgrouping). The other
kind of job is simply one of interpreting some aspect of such a part
(the Interpret subgrouping of parts).
.END
.B

			⊂ααααααα⊃
			~  INFO ~
			%αααπααα$
			    ~
			    ~
                ⊂αααααααααααβααααααπααααααα⊃
	  	↓	    ↓      ↓       ↓
	  Definition  Intuition  Ties  Examples
				   ~
	    ⊂αααααααααπααααααααααααβαααααααααααααα⊃
   	    ↓         ↓            ↓              ↓
          Analogy  Family  Alternatives Related-objects(thms, conjecs, axioms)

.E
.END

.BEGIN FILL SELECT 1 PREFACE 100 MILLS
The INFO grouping is concerned with handling types of questions dealing with
ubiquitous facts about this BEING. These include categories which are
needed by more than one of the preceding three groupings, those needed in
several different ways, those which other BEINGs might want to inspect, etc.
The names of the parts in the picture are self-explanatory.
.END

.TURN OFF "{}"
.BEGIN NOFILL GROUP

⊗4↓_Parts which every BEING might have:_↓⊗*


⊗5RECOGNITION GROUPING⊗*
 CHANGES		Is this rele. to producing the desired change in the world?
 FINAL  		What situations is this β rele. to bringing about?
 PAST			Where is this used frequently, to advantage?
 IDEN {not}{quick}	{fast} tests to see if this β is {not} currently referred to

⊗5ALTER GROUPING⊗*
 GENERALIZATIONS	What is this a special case of? How to make this more general.
 SPECIALIZATIONS	Special cases of this? What new properties exist only there?
 BOUNDARY		What marks the limits of this concept? Why exactly there?
 ORDERING(Complete)	What order should the parts be concentrated on (default)
 WORTH	Aesthetic, efficiency, complexity, ubiquity, certainty, analogic utility, survival basis
 INTEREST		What special factors make this type of BEING interesting?
 JUSTIFICATION  Why do you believe this? Formal and intuitive. What has been tried already?
 OPERATIONS		Associated with β. What can one do to it, what happens then?

⊗5ACT GROUPING⊗*
 BOUNDARY-OPERATIONS {not}  Ops rele. to patching {messing}up not-bdy-entities {bdy-entities}
 VIEWS			How to view this as another type of entity.
 REPRESENTATION		How should entities of this type be represented internally?
 
⊗5INFO GROUPING⊗*
 DEFINITION		Several alternative formal definitions of this concept.
 INTU		Analogic interp., ties to simpler objects, to reality. Opaque.
 TIES   	Alterns. Parents/offspring. Analogies. Associated thms, conjecs, axioms, specific β's.
 EXAMPLES {not} {bdy}	Includes trivial, typical, and advanced cases of each type.
 CONTENTS 	 What is the value stored here, the actual contents of this entity.
.APART END

.BEGIN NOFILL GROUP

↓_⊗4Parts which RELATION and ACTIVE META BEINGs can possess, which not all other BEINGs have:⊗*_↓

DOMAIN/RANGE {not}     Found in the Alter grouping, associated with the Boundary part.
	Collection of (what one can{'t} apply it to, what kind of thing one {never} gets)
ALGORITHMS  (In ACT.CHANGE) How to compute this function. Related to Repr.
.APART END
.BEGIN NOFILL GROUP


↓_⊗4Additional parts possessed by archetypical BEINGs whose names are part names:⊗*_↓

FILLIN (Act.Change) How to initially fill it in, when and how to augment what is there already.
STRUCTURE (Act.Change)  Whether, When, How to restructure (or split) this part.
 Under INTERPRET subgrouping of parts
CHECK (Act.Interpret)	How to examine and test out what is already there.
.APART END

.FILL

The contents of each part of each BEING should be simple: another
BEING, a trivial program, or a set of rules.  At least at the lower levels,
each BEING part p should
have a specific structure, which should be described in the REPRESENTATION part
of the (strategy) BEING whose name is p.

Common knowledge should in some cases be factored out. Possibilities:
(i) always ask a specific BEING, who sometimes queries a more general
one if some knowledge is missing; (ii) always query the most general
BEING relevant, who then asks some specific ones (this sounds bad);
(iii) ask all the BEINGs pseudo-simultaneously, and examine the
responders (this sounds too costly). The organization of BEINGs into
hierarchical groupings reflects the spirit of (i): a BEING only
contains additions and exceptions to what its generalization contains.
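
A minimal sketch of alternative (i), again in modern Common Lisp and with every
BEING and part name invented, under the added assumption that each BEING's
GENERALIZATIONS part names the single BEING it defers to: the question goes to the
specific BEING first and is passed upward only when the part is locally absent.
.BEGIN NOFILL GROUP

(defun ask-with-inheritance (being part)
  "Return BEING's PART, consulting successively more general BEINGs when absent."
  (cond ((null being) nil)
        ((get being part))                 ; found locally: additions and exceptions
        (t (ask-with-inheritance (get being 'generalizations) part))))

;; ORDERED-PAIR says nothing about its REPRESENTATION,
;; so the question is answered by the more general SET being.
(setf (get 'set 'representation) '(a rectangle of lattice points))
(setf (get 'ordered-pair 'generalizations) 'set)

;; (ask-with-inheritance 'ordered-pair 'representation)
;;      ==>  (A RECTANGLE OF LATTICE POINTS)
.APART END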

At some level, presumably in the environmental function, a small collection of rules
will be all that is required. One might view this change as gradual, the
number of BEING parts decreasing as the level increases. By the final level,
what is left is truly more of a structured rule than a BEING.
A scheme for organizing the pointer systems for RULES now follows.
Each rule will have several types of pointers, to indicate relevant
rules. One set might be as follows:

.BEGIN NOFILL GROUP

ABSOLUTE  The rules pointed to here should definitely be examined.
SUCCESS   If this rule succeeds, then look at these anyway.
FAILURE   If this rule fails, by a little, then look at these. (More descriptive, perhaps).
EXTEND    If a more comprehensive result is desired
CONTRACT  If a more restricted, simpler result is desired.
WORTH     What is this rule's expense of execution? Its chance of success?
          Point to cheaper rules/functions; point to costlier rules/BEINGS?
INTU      Point to abstract intuitive rules relevant to this rule.
DEF       Point to less abstract rules which are related to this one.

.APART END
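
Purely as an illustration (no representation is fixed by the proposal), such a rule
could be a record whose slots are exactly the pointer types just listed, together
with the situation-action pair itself; a two-branch interpreter then decides whether
the SUCCESS or the FAILURE pointers are to be examined next.
.BEGIN NOFILL GROUP

(defstruct rule
  test action                        ; the situation-action pair itself
  absolute success failure           ; rules to examine next
  extend contract worth intu def)    ; remaining pointer types

(defun run-rule (rule situation)
  "Apply RULE to SITUATION; return the list of rules to be examined next."
  (if (funcall (rule-test rule) situation)
      (progn (funcall (rule-action rule) situation)
             (append (rule-absolute rule) (rule-success rule)))
      (append (rule-absolute rule) (rule-failure rule))))

;; Invented usage: a rule whose FAILURE slot points at more descriptive rules.
;;   (run-rule (make-rule :test #'intuitively-plausible-p
;;                        :action #'propose-as-conjecture
;;                        :failure (list *seek-counterexample-rule*))
;;             current-situation)
.APART END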

Notice that the rule parts are simpler, fewer, and more uniform than the set
of BEING parts. A simple pool of unstructured rules might be all that is needed
(situation-action productions).

At the very highest level, the environment will consist of absolutely opaque
functions, coded for efficiency, which perform "primitive" functions absent in
INTERLISP but desirable for our system.
The precise representation of the efficient functions is not important,
since they are completely opaque to the rest of the system. Access to a
compiler should probably be permitted; once the system has an algorithm
to do something, there is no reason why it shouldn't be allowed to point
to a compiled routine for the same algorithm.
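
In modern Common Lisp (rather than the INTERLISP compiler the proposal has in mind)
the idea takes only a few lines; the SUCCESSOR being and both part names below are
invented for illustration.
.BEGIN NOFILL GROUP

;;; Keep a pointer to a compiled version of an algorithm alongside the
;;; s-expression from which it was compiled.
(setf (get 'successor 'algorithms) '(lambda (n) (+ n 1)))
(setf (get 'successor 'compiled-algorithms)
      (compile nil (get 'successor 'algorithms)))

;; (funcall (get 'successor 'compiled-algorithms) 3)   ==>  4
.APART END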

.SKIP 4

⊗5INTUITIVE KNOWLEDGE⊗*

The pictures of sets, mentioned above, are now explained more fully.
Euler, to overcome language problems when lecturing a German
princess, devised the use of circles to represent sets. Venn and others
have frequently adopted this image. For a machine, it seems more
apropos to use a rectangle, not a circle.  Consider the lattice of
integral points in two dimensions. Now a set is viewed as a rectangle
-- or a combination of a few rectangles -- in this space. This makes it
hard to get any intuition about continuity or boundary or openness, but
works fine for the discrete sets which are dealt with in logic, 
elementary set theory, arithmetic, number theory, and algebra. It is
probable that the system will therefore not be tried in the domains of
real analysis, geometry, topology, etc. with only this primitive notion
of space and confinement.  Specifically, a set in this world is an
ordered pair of pairs of natural numbers. Projection is thus trivial
in LISP (CAR or CADR), as is test for intersection, subset, etc.
Notice that these require use of numbers, ordering, sets, etc., so the
functions which accomplish them must be opaque.  The only interaction
with the rest of the system will be for these pictures to suggest and
reinforce and veto various conjectures.  They serve to generate
empirical evidence for the rest of the system.
To avoid gerrymandering, it might be necessary to view a set as a list
(of arbitrary length) of ordered pairs; an absent pair can be assumed to be
some default pair. That is, a set is a simplex in Hilbert space; each set has
infinite dimension, but differs from any other in only finitely many of them.
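
Returning to the basic one-rectangle version, the following sketch shows how such a
picture might be coded (in modern Common Lisp); the exact encoding chosen here, an
ordered pair of closed intervals of naturals, and all the function names are
assumptions of the sketch, not commitments of the proposal.
.BEGIN NOFILL GROUP

;;; An intuitive "set" is a rectangle of lattice points, stored as an ordered
;;; pair of closed intervals of naturals:  ((xlo xhi) (ylo yhi)).
(defun make-rect (xlo xhi ylo yhi) (list (list xlo xhi) (list ylo yhi)))

;; Projection onto either axis really is CAR or CADR, as claimed.
(defun project-x (s) (car s))
(defun project-y (s) (cadr s))

(defun interval-inside-p (i j)       ; is interval I contained in interval J?
  (and (<= (car j) (car i)) (<= (cadr i) (cadr j))))

(defun interval-meets-p (i j)        ; do intervals I and J overlap?
  (and (<= (car i) (cadr j)) (<= (car j) (cadr i))))

(defun rect-subset-p (a b)
  (and (interval-inside-p (project-x a) (project-x b))
       (interval-inside-p (project-y a) (project-y b))))

(defun rect-intersects-p (a b)
  (and (interval-meets-p (project-x a) (project-x b))
       (interval-meets-p (project-y a) (project-y b))))

;; (rect-subset-p (make-rect 1 2 1 2) (make-rect 0 5 0 5))   ==>  T
;; Such answers only suggest, reinforce, or veto conjectures; they prove nothing.
.APART END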


How should the system choose which intuitive representation(s) to use?
Some considerations are: 
	What operations are to be done to this set
(e.g., ⊗6ε⊗*, ⊂, ∩, ∪, ⊗6≡⊗*, =, ',...)? The representations differ in cost of
maintenance and in the ease with which each of these operations can be
carried out. 
	How artificial is the representation for the given set?
Some will be quite natural, e.g., if the set is a nest then use the
pointer structure. 
	How much is "given away" by the model? This is a
question of fairness, and means that the system-writers must build in
constraints (or, alternatively, make the intuitive operations faulty).
	How compatible is each representation with the computer's 
physiology?  Thus it is
almost impossible to represent pictures or blobs directly, but very
suitable to store algebraic equations defining such geometric images.
	Does the representation suggest a set theory with basic elements 
which are non-sets; with an infinite model; with any special desirable or
undesirable qualities? For example, the geometric representation
seems to demand the concept of continuity, which the system probably
won't ever use in any definite way.

⊗5Initial Knowledge: Level 2⊗*

For each BEING in the system initially, we list below its name and give a very
brief suggestion of its role in the grand scheme. The order of the BEINGs below
is the same as in the LEVEL 3 presentation, hence can be used as a sort of
table of contents for that more detailed section.

.BEGIN NOFILL INDENT 0

Please see the GIVEN KNOWLEDGE document for this information.
.END
.SKIP TO COLUMN 1
⊗5Initial Knowledge: Level 3⊗*

For each BEING, we now present a brief summary of the value stored in each of its
parts. 
If a part name is absent, it is expected that this will ⊗4NEVER⊗* be filled in for
this particular BEING. If the name is present but there is no value, then the
system might need to (and would, then) fill this part in sometime.

.TURN ON "{}"
.BEGIN
.PAGE FRAME 48 HIGH 84 WIDE
.AREA TEXT LINES 3 TO 47
.AREA FOOTING LINE 48
.AREA HEADING LINE 1
.EVERY HEADING(,,)
.EVERY FOOTING(⊗7{DATE},,page {PAGE})
.!XGPLFTMAR←120
.SPACING 55 MILLS
.PREFACE 160 MILLS
.NOFILL
.PREFACE 17 MILLS
.PAGE←PAGE-1
.NEXT PAGE
.NOFILL
.GROUP SKIP 12
	This data is found in the accompanying document, ⊗4Given Knowledge⊗*.

.END
.FILL
.PAGE FRAME 50 HIGH 84 WIDE
.AREA TEXT LINES 4 TO 48
.AREA HEADING LINES 1 TO 3
.AREA FOOTING LINE 50
.!XGPLFTMAR←200
.SPACING 55 MILLS
.PREFACE 160 MILLS
.NOFILL
.PREFACE 45 MILLS
.FILL
.EVERY HEADING(⊗5MATH THEORY FORMATION⊗*,,⊗4Doug Lenat⊗*)
.EVERY FOOTING(⊗7{DATE},,page {PAGE})
.PAGE←PAGE-1
.NEXT PAGE
⊗2↓_4. COMMUNICATION_↓⊗*

Nothing has been done in this area different from the work described in the
second sketch. There, a list was compiled of the English words with which the
system must be familiar. The next step is to exhaustively categorize all
words and phrases, and tie each one in to a BEING or rule system. Also, some
fixed language for communicating intuitive information must be devised.

.NEXT PAGE
⊗2↓_5. EXAMPLES_↓⊗*

⊗5Example 1: Contemplation; forming links, intuitive conjectures⊗*

.FILL
.TURN OFF "{}"

Following is the protocol of the first attempt to exercise the given
knowledge.  The system was assumed to be starting for the first time, at the
most removed level of strategies.
.NOFILL

(this was done in the Second Sketch; it should be redone here using the new organization)


⊗5Example 2: Discovering and developing a family of analogies⊗*

(this was done in the Second Sketch; it should be redone here using the new organization)


⊗5Example 3: Formally Investigating an intuitively believed conjecture⊗*

Note: It is difficult to find hard proofs at this low level.


(1) Conjecture: The only relation from 0 to any set X is 0.

Strategy: Conjecture to prove

     Intuitive Justification: Cannot seem to find any place for the arrow to
     come from (i.e. can't draw arrow because can't choose an element from
     the domain because there aren't any)

     Definite: A relation between A and B is a subset of A X B.
               A X B is the set of all ordered pairs <a,b> such that a ⊗6ε⊗* A and b ⊗6ε⊗* B
               An ordered pair <a,b> is the set {{a},{a,b}}.

Strategy: To prove Any α is β, consider any α, show it's β.

     Consider any relation R: 0 → X.  Show it is 0.
     Show all subsets of 0 X X are 0.
     Intuition: All subsets of a set are empty iff the set is empty. (Becomes a
          lemma.)

     Must show 0 X X = 0 for all X.
     This is intuitive.  (Becomes a lemma.) Done.

Strategy: To prove p iff q, prove p implies q and q implies p.  To prove
     p implies q, assume p and the negation of q, and derive a contradiction.

     Now must prove two lemmas, by contradiction:
     (1) Say X is not empty but all its subsets are.  If X is not empty,
         there is some x ⊗6ε⊗* X.  If x ⊗6ε⊗* X then {x} ⊂ X. Contradiction.
         Say X is empty but it has a non-empty subset Y.  If Y is non-
         empty, there is some y ⊗6ε⊗* Y.  By definition, y ⊗6ε⊗* X.  Contradiction.

     (2) 0 X X is the set of all ordered pairs (a, b) such that a ⊗6ε⊗* 0 and
         b ⊗6ε⊗* X. Suppose 0 X X is non-empty.  Then there is such an ordered
         pair.  Then there is an a such that a ⊗6ε⊗* 0.  Contradiction.
              
Popping up, we discover that (1) is now proved.

Try to prove the converse of (1).
	Analogy with last proof (this will actually work) OR
	Establish the easy results in the following sequence:
		if R relates A to B, then R↑-↑1 relates B to A
		if R is the empty relation, then so is R↑-↑1
		if R relates any set X to 0, then R↑-↑1 relates 0 to X
			but by the last theorem, R↑-↑1 must be the empty relation.
		So R must be the empty relation. So the converse is proved.
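
.FILL

The intuitive, empirical side of such a proof could itself be produced mechanically.
The following sketch, in modern Common Lisp with all function names invented,
enumerates every relation between two small explicit sets, supplying evidence for
the conjecture, for its converse, and for the 0 X X = 0 lemma.
.NOFILL

(defun cross-product (a b)
  "All ordered pairs with first element from A and second from B."
  (loop for x in a append (loop for y in b collect (list x y))))

(defun all-subsets (s)
  (if (null s)
      (list nil)
      (let ((rest (all-subsets (cdr s))))
        (append rest
                (mapcar (lambda (sub) (cons (car s) sub)) rest)))))

(defun relations-between (a b)
  "All relations from A to B, i.e. all subsets of A X B."
  (all-subsets (cross-product a b)))

;; (cross-product '() '(x y))                   ==>  NIL     the 0 X X = 0 lemma
;; (relations-between '() '(x y))               ==>  (NIL)   only the empty relation
;; (relations-between '(x y) '())               ==>  (NIL)   and likewise its converse
;; (length (relations-between '(1 2) '(a b)))   ==>  16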

.SKIP TO COLUMN 1
.FILL
⊗2↓_6. BIBLIOGRAPHY_↓⊗*

⊗5BOOKS and MEMOS⊗*

Allendoerfer, Carl B., and Oakley, Cletis O., ⊗4Principles of
Mathematics⊗*, Third Edition, McGraw-Hill, New York, 1969.

Alexander, Stephen, ⊗4On the Fundamental Principles of Mathematics⊗*,
B. L. Hamlen, New Haven, 1849.

Aschenbrenner, Karl, ⊗4The Concepts of Value⊗*, D. Reidel Publishing
Company, Dordrecht, Holland, 1971.

Atkin, A. O. L., and Birch, B. J., eds., ⊗4Computers in Number Theory⊗*,
Proceedings of the 1969 SRCA Oxford Symposium, Academic Press, New York, 
1971.

Avey, Albert E., ⊗4The Function and Forms of Thought⊗*, Henry Holt and
Company, New York, 1927.

Badre, Nagib A., ⊗4Computer Learning From English Text⊗*, Memorandum
No. ERL-M372, Electronics Research Laboratory, UCB, December 20, 1972.
Also summarized in ⊗4CLET -- A Computer Program that Learns Arithmetic
from an Elementary Textbook⊗*, IBM Research Report RC 4235, February
21, 1973.

Bahm, A. J., ⊗4Types of Intuition⊗*, University of New Mexico Press,
Albuquerque, New Mexico, 1960.

Banks, J. Houston, ⊗4Elementary-School Mathematics⊗*, Allyn and Bacon,
Boston, 1966.

Berkeley, Hastings, ⊗4Mysticism in Modern Mathematics⊗*, Oxford U. Press,
London, 1910.

Beth, Evert W., and Piaget, Jean, ⊗4Mathematical Epistemology and
Psychology⊗*, Gordon and Breach, New York, 1966.

Black, Max, ⊗4Margins of Precision⊗*, Cornell University Press,
Ithaca, New York, 1970.

Blackburn, Simon, ⊗4Reason and Prediction⊗*, Cambridge University Press,
Cambridge, 1973.

Bruner, Jerome S., Goodnow, J. J., and Austin, G. A., ⊗4A Study of
Thinking⊗*, Harvard Cognition Project, John Wiley & Sons,
New York, 1956.

Charosh, Mannis, ⊗4Mathematical Challenges⊗*, NCTM, Washington, D.C., 1965.

Copeland, Richard W., ⊗4How Children Learn Mathematics⊗*, The MacMillan
Company, London, 1970.

Courant, Richard, and Robbins, Herbert, ⊗4What is Mathematics?⊗*,
Oxford University Press, New York, 1941.

D'Augustine, Charles, ⊗4Multiple Methods of Teaching Mathematics in the
Elementary School⊗*, Harper & Row, New York, 1968.

Douglas, Mary (ed.), ⊗4Rules and Meanings⊗*, Penguin Education,
Baltimore, Md., 1973.

Dubin, Robert, ⊗4Theory Building⊗*, The Free Press, New York,  1969.

Dubs, Homer H., ⊗4Rational Induction⊗*, U. of Chicago Press, Chicago, 1930.

Dudley, Underwood, ⊗4Elementary Number Theory⊗*, W. H. Freeman and
Company, San Francisco, 1969.

Eynden, Charles Vanden, ⊗4Number Theory: An Introduction to Proof⊗*, 
International Textbook Company, Scranton, Pennsylvania, 1970.

Fuller, R. Buckminster, ⊗4Intuition⊗*, Doubleday, Garden City, New York,
1972.

GCMP, ⊗4Key Topics in Mathematics⊗*, Science Research Associates,
Palo Alto, 1965.

Goldstein, Ira, ⊗4Elementary Geometry Theorem Proving⊗*, MIT AI Memo 280,
April, 1973.

Goodstein, R. L., ⊗4Fundamental Concepts of Mathematics⊗*, Pergamon Press,
New York, 1962.

Hadamard, Jacques, ⊗4The Psychology of Invention in the Mathematical
Field⊗*, Dover Publications, New York, 1945.

Halmos, Paul R., ⊗4Naive Set Theory⊗*, D. Van Nostrand Co., 
Princeton, 1960.

Hartman, Robert S., ⊗4The Structure of Value: Foundations of Scientific
Axiology⊗*, Southern Illinois University Press, Carbondale, Ill., 1967.

Hempel, Carl G., ⊗4Fundamentals of Concept Formation in Empirical
Science⊗*, University of Chicago Press, Chicago, 1952.

Hibben, John Grier, ⊗4Inductive Logic⊗*, Charles Scribner's Sons,
New York, 1896.

Hilpinen, Risto, ⊗4Rules of Acceptance and Inductive Logic⊗*, Acta
Philosophica Fennica, Fasc. 22, North-Holland Publishing Company,
Amsterdam, 1968.

Hintikka, Jaakko, and Suppes, Patrick (eds.), ⊗4Aspects of Inductive
Logic⊗*, North-Holland Publishing Company, Amsterdam, 1966.

Jouvenal, Bertrand de, ⊗4The Art of Conjecture⊗*, Basic Books, Inc.,
New York, 1967.

Kershner, R.B., and L.R.Wilcox, ⊗4The Anatomy of Mathematics⊗*, The Ronald
Press Company, New York, 1950.

Klauder, Francis J., ⊗4The Wonder of Intelligence⊗*, Christopher
Publishing House, North Quincy, Mass., 1973.

Korner, Stephan, ⊗4Conceptual Thinking⊗*, Dover Publications, New York,
1959.

Kubinski, Tadeusz, ⊗4On Structurality of Rules of Inference⊗*, Prace
Wroclawskiego Towarzystwa Naukowego, Seria A, Nr. 107, Wroclaw,
Poland, 1965.

Lakatos, Imre (ed.), ⊗4The Problem of Inductive Logic⊗*, North-Holland 
Publishing Co., Amsterdam, 1968.

Lamon, William E., ⊗4Learning and the Nature of Mathematics⊗*, Science
Research Associates, Palo Alto, 1972.

Lefrancois, Guy R., ⊗4Psychological Theories and Human Learning⊗*, 1972.

Le Lionnais, F., ⊗4Great Currents of Mathematical Thought⊗*, Dover
Publications, New York, 1971.

Margenau, Henry, ⊗4Integrative Principles of Modern Thought⊗*, Gordon
and Breach, New York, 1972.

Martin, James, ⊗4Design of Man-Computer Dialogues⊗*, Prentice-Hall, Inc.,
Englewood Cliffs, N. J., 1973.

Martin, R. M., ⊗4Toward a Systematic Pragmatics⊗*, North Holland Publishing
Company, Amsterdam, 1959.

Meyer, Jerome S., ⊗4Fun With Mathematics⊗*, Fawcett Publications,
Greenwich, Connecticut, 1952.

Mirsky, L., ⊗4Studies in Pure Mathematics⊗*, Academic Press, New
York, 1971.

Moore, Robert C., ⊗4D-SCRIPT: A Computational Theory of Descriptions⊗*,
MIT AI Memo 278, February, 1973.

National Council of Teachers of Mathematics, ⊗4The Growth of Mathematical
Ideas⊗*, 24th yearbook, NCTM, Washington, D.C., 1959.

Newell, Allen, and Simon, Herbert, ⊗4Human Problem Solving⊗*, 1972.

Nevins, Arthur J., ⊗4A Human Oriented Logic for Automatic Theorem
Proving⊗*, MIT AI Memo 268, October, 1972.

Niven, Ivan, and Zuckerman, Herbert, ⊗4An Introduction to the Theory
of Numbers⊗*, John Wiley & Sons, Inc., New York, 1960.

Olson, Robert G., ⊗4Meaning and Argument⊗*, Harcourt, Brace & World,
New York, 1969.

Ore, Oystein, ⊗4Number Theory and its History⊗*, McGraw-Hill, 
New York, 1948.

Pietarinen, Juhani, ⊗4Lawlikeness, Analogy, and Inductive Logic⊗*,
North-Holland, Amsterdam, published as v. 26 of the series
Acta Philosophica Fennica (J. Hintikka, ed.), 1972.

Poincare', Henri, ⊗4The Foundations of Science: Science and Hypothesis,
The Value of Science, Science and Method⊗*, The Science Press, New York,
1929.

Polya, George, ⊗4Mathematics and Plausible Reasoning⊗*, Princeton
University Press, Princeton, Vol. 1, 1954;  Vol. 2, 1954.

Polya, George, ⊗4How To Solve It⊗*, Second Edition, Doubleday Anchor Books, 
Garden City, New York, 1957.

Polya, George, ⊗4Mathematical Discovery⊗*, John Wiley & Sons,
New York, Vol. 1, 1962; Vol. 2, 1965.

Richardson, Robert P., and Edward H. Landis, ⊗4Fundamental Conceptions of
Modern Mathematics⊗*, The Open Court Publishing Company, Chicago, 1916.

Rosskopf, Steffe, Taback  (eds.), ⊗4Piagetian Cognitive-
Development Research and Mathematical Education⊗*,
National Council of Teachers of Mathematics, New York, 1971.

Saaty, Thomas L., and Weyl, F. Joachim (eds.), ⊗4The Spirit and the Uses
of the Mathematical Sciences⊗*, McGraw-Hill Book Company, New York, 1969.

Schminke, C. W., and Arnold, William R., eds., ⊗4Mathematics is a Verb⊗*,
The Dryden Press, Hinsdale, Illinois, 1971.

Singh, Jagjit, ⊗4Great Ideas of Modern Mathematics⊗*, Dover Publications,
New York, 1959.

Skemp, Richard R., ⊗4The Psychology of Learning Mathematics⊗*, 
Penguin Books, Ltd., Middlesex, England, 1971.

Stein, Sherman K., ⊗4Mathematics: The Man-Made Universe: An Introduction
to the Spirit of Mathematics⊗*, Second Edition, W. H. Freeman and 
Company, San Francisco,  1969.

Stewart, B. M., ⊗4Theory of Numbers⊗*, The MacMillan Co., New York, 1952.

Stokes, C. Newton, ⊗4Teaching the Meanings of Arithmetic⊗*, 
Appleton-Century-Crofts, New York, 1951.

Suppes, Patrick, ⊗4A Probabilistic Theory 
of Causality⊗*, Acta Philosophica Fennica,
Fasc. 24, North-Holland Publishing Company, Amsterdam, 1970.

Venn, John, ⊗4The Principles of Empirical or Inductive Logic⊗*,
MacMillan and Co., London, 1889.

Waismann, Friedrich, ⊗4Introduction to Mathematical Thinking⊗*, 
Frederick Ungar Publishing Co., New York, 1951.

Wright, Georg H. von, ⊗4A Treatise on Induction and Probability⊗*,
Routledge and Kegan Paul, London, 1951.

⊗5ARTICLES⊗*

Amarel, Saul, ⊗4On Representations of Problems of Reasoning about
Actions⊗*, Machine Intelligence 3, 1968, pp. 131-171.

Bledsoe, W. W., ⊗4Splitting and Reduction Heuristics in Automatic
Theorem Proving⊗*, Artificial Intelligence 2, 1971, pp. 55-77.

Bledsoe, W. W., and Bruell, Peter, ⊗4A Man-Machine Theorem-Proving System⊗*,
Artificial Intelligence 5, 1974, 51-72.

Buchanan, Feigenbaum, and Sridharan, ⊗4Heuristic Theory Formation⊗*,
Machine Intelligence 7, 1972, pp. 267-...

Bundy, Alan, ⊗4Doing Arithmetic with Diagrams⊗*, 3rd IJCAI, 
1973, pp. 130-138.

Green, Waldinger, Barstow, Elschlager, Lenat, McCune, Shaw, and Steinberg,
⊗4Progress Report on Program-Understanding Systems⊗*, Memo AIM-240,
CS Report STAN-CS-74-444, Artificial Intelligence Laboratory,
Stanford University, August, 1974.

Guard, J. R., et al., ⊗4Semi-Automated Mathematics⊗*, JACM 16,
January, 1969, pp. 49-62.

Hasse, H., ⊗4Mathematik als Wissenschaft, Kunst und Macht⊗*,
(Mathematics as Science, Art, and Power), Baden-Baden, 1952.

Hewitt, Carl, ⊗4A Universal Modular ACTOR Formalism for
Artificial Intelligence⊗*, Third International Joint Conference on
Artificial Intelligence,
1973, pp. 235-245.

Menges, Gunter, ⊗4Inference and Decision⊗*, 
A Volume in ⊗4Selecta Statistica Canadiana⊗*,
John Wiley & Sons, New York,  1973, pp. 1-16.

Kling, Robert E., ⊗4A Paradigm for Reasoning by Analogy⊗*,
Artificial Intelligence 2, 1971, pp. 147-178.

Knuth, Donald E., ⊗4Ancient Babylonian Algorithms⊗*,
CACM 15, July, 1972, pp. 671-677.

Lee, Richard C. T., ⊗4Fuzzy Logic and the Resolution Principle⊗*,
JACM 19, January, 1972, pp. 109-119.

McCarthy, John, and Hayes, Patrick, ⊗4Some Philosophical Problems
from the Standpoint of Artificial Intelligence⊗*, Machine Intelligence
4, 1969, pp. 463-502.

Martin, W., and Fateman, R., ⊗4The MACSYMA System⊗*, Second
Symposium on Symbolic and Algebraic Manipulation, 1971, pp. 59-75.

Minsky, Marvin, ⊗4Frames⊗*, in ⊗4Psychology of Computer
Vision⊗*, 1974.

Moore, J., and Newell, ⊗4How Can Merlin Understand?⊗*, Carnegie-Mellon University
Department of Computer Science "preprint", November 15, 1973.

Pager, David, ⊗4A Proposal for a Computer-based Interactive Scientific
Community⊗*, CACM 15, February, 1972, pp. 71-75.

Pager, David, ⊗4On the Problem of Communicating Complex Information⊗*,
CACM 16, May, 1973, pp. 275-281.

Sloman, Aaron, ⊗4Interactions Between Philosophy and Artificial 
Intelligence: The Role of Intuition and Non-Logical Reasoning in
Intelligence⊗*, Artificial Intelligence 2, 1971, pp. 209-225.

Sloman, Aaron, ⊗4On Learning about Numbers⊗*,...

Teitelman, Warren, ⊗4INTERLISP Reference
Manual⊗*, XEROX PARC, 1974.